Quantitative Big Imaging

Kevin Mader
20 March 2014

ETHZ: 227-0966-00L

Analysis of Complex Objects

Course Outline

  • 20th February - Introductory Lecture
  • 27th February - Filtering and Image Enhancement (A. Kaestner)
  • 6th March - Basic Segmentation, Discrete Binary Structures
  • 13th March - Advanced Segmentation
  • 20th March - Analyzing Single Objects
  • 27th March - Analyzing Complex Objects
  • 3rd April - Spatial Distribution
  • 10th April - Statistics and Reproducibility
  • 17th April - Dynamic Experiments
  • 8th May - Big Data
  • 15th May - Guest Lecture - Applications in Material Science
  • 22nd May - Project Presentations

Literature / Useful References

  • Julien Claude, “Morphometrics with R”
  • Online through ETHZ
  • Buy it
  • John C. Russ, “The Image Processing Handbook”,(Boca Raton, CRC Press)
  • Available online within domain ethz.ch (or proxy.ethz.ch / public VPN)

Previously on QBI ...

  • Image Enhancement
    • Highlighting the contrast of interest in images
    • Minimizing Noise
  • Segmentation
    • Understanding value histograms
    • Dealing with multi-valued data
  • Automatic Methods
    • Hysteresis Method, K-Means Analysis
  • Regions of Interest
    • Contouring
  • Component Labeling
  • Single Shape Analysis

Outline

  • Motivation (Why and How?)
  • What are Distance Maps?
  • Skeletons
    • Tortuosity
  • What are thickness maps?
    • Thickness with Skeletons
  • Watershed Segmentation
    • Connected Objects
  • Curvature
    • Characteristic Shapes

Learning Objectives

Motivation (Why and How?)

  • How do we measure distances between many objects?
  • How can we extract topology of a structure?

  • How can we measure sizes in complicated objects?

  • How do we measure sizes relevant for diffusion or other local processes?

  • How do we identify separate objects when they are connected?

  • How do we investigate surfaces in more detail and their shape?

  • How can we compare shape of complex objects when they grow?

    • Are there characteristic shape metrics?

What did we want in the first place

We want to simplify our data, but an ellipse model is too simple for many shapes

So while bounding box and ellipse-based models are useful for many objects and cells, they do a very poor job with the sample below.

Single Cell

Why

  • We assume an entity consists of connected pixels (wrong)
  • We assume the objects are well modeled by an ellipse (also wrong)

What to do?

  • Is it 3 connected objects which should all be analyzed separately?
  • If we could divide it, we could then analyze each part as an ellipse
  • Is it one network of objects and we want to know about the constrictions?
  • Is it a cell or organelle with docking sites for other cells?
  • Neither extents nor anisotropy are very meaningful; we need a more specific metric to characterize such structures

Distance Maps

The distance map is an image where each voxel's value is its distance to the nearest boundary. For each image there are two possible distance maps.

  1. Foreground: each voxel in the foreground takes the distance to the closest background voxel.
  2. Background: each voxel in the background takes the distance to the closest foreground voxel.
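Both maps can be sketched with SciPy's Euclidean distance transform (a Python illustration of the idea; the geometry is invented and `scipy.ndimage` is not part of the course material):

```python
import numpy as np
from scipy import ndimage

# Binary test image: a 5x5 foreground square in a 9x9 field
img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True

# 1. Foreground map: each foreground voxel gets the distance
#    to the closest background voxel
fg_dist = ndimage.distance_transform_edt(img)

# 2. Background map: each background voxel gets the distance
#    to the closest foreground voxel
bg_dist = ndimage.distance_transform_edt(~img)
```

The foreground map peaks at the center of the square (3 pixels from the nearest background row), the background map at the image corners.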

Simple Circles


Distance Map

One of the most useful properties of the distance map is that it is relatively insensitive to small changes in connectivity.

  • Component Labeling would find radically different results for these two images
    • One has 4 small circles
    • One has 1 big blob
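This insensitivity can be checked numerically; a small sketch (the 40×40 geometry and circle positions are made up for illustration):

```python
import numpy as np
from scipy import ndimage

def circle(shape, center, r):
    """Boolean disk of radius r at the given center."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= r ** 2

shape = (40, 40)
# Four well-separated circles
apart = np.zeros(shape, dtype=bool)
for c in [(10, 10), (10, 30), (30, 10), (30, 30)]:
    apart |= circle(shape, c, 6)
# The same circles moved until they just touch: one connected blob
touching = np.zeros(shape, dtype=bool)
for c in [(14, 14), (14, 26), (26, 14), (26, 26)]:
    touching |= circle(shape, c, 6)

# Component labeling finds radically different results (4 objects vs 1)
n_apart = ndimage.label(apart)[1]
n_touching = ndimage.label(touching)[1]

# The distance-map maximum (a local size estimate) is unchanged
d_apart = ndimage.distance_transform_edt(apart).max()
d_touching = ndimage.distance_transform_edt(touching).max()
```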

Simple Circles

Closer Circles

Circles Histogram

Distance Map of Cell

Foreground

Cell Distance Map

Background

Cell Distance Map

Skeletonization / Networks

For some structures like cellular materials and trabecular bone, we want a more detailed analysis than just thickness. We want to know

  • which structures are connected
  • how they are connected
  • express the network in a simple manner
    • quantify tortuosity
    • branching
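Tortuosity, under one common definition (path length along the structure divided by the straight end-to-end distance; whether the lecture uses exactly this ratio is an assumption here), is easy to compute once the network is traced:

```python
import numpy as np

def tortuosity(points):
    """Path length along the points divided by the straight
    end-to-end distance; 1.0 means a perfectly straight path."""
    points = np.asarray(points, dtype=float)
    steps = np.diff(points, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()
    straight = np.linalg.norm(points[-1] - points[0])
    return path_len / straight

# An L-shaped path: 10 up then 10 across -> 20 / sqrt(200) = sqrt(2)
t = tortuosity([(0, 0), (0, 10), (10, 10)])
```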

Mesh Mask

Skeletonization

The first step is to take the distance transform of the structure \[ I_d(x,y) = \textrm{dist}(I(x,y)) \] In this image we can already see local maxima forming a sort of backbone that closely maps to the structure we are interested in.

Distance Map

We then apply the Laplacian filter as an approximation to the second-derivative operator, which highlights the points with high local curvature (the ridges of the distance map).

\[ \nabla I_{d}(x,y) = \left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)I_d \approx \underbrace{\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}}_{\textrm{Laplacian Kernel}} \otimes I_d(x,y) \]
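Applying that kernel to the distance map of a simple bar shows the effect (a SciPy sketch; the bar geometry is invented for illustration):

```python
import numpy as np
from scipy import ndimage

# Distance map of a horizontal bar; its ridge runs along the middle row
img = np.zeros((7, 15), dtype=bool)
img[2:5, 1:14] = True
dist = ndimage.distance_transform_edt(img)

# The 3x3 Laplacian kernel from the slide
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])
lap = ndimage.convolve(dist, kernel)
# lap is largest along the local maxima (the backbone) of the map
```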

Skeleton

Creating the skeleton

We can locate the local maxima of the structure by setting a minimum surface distance \[ I_d(x,y)>MIN-DIST \] and combining it with a minimum slope value \[ \nabla I_d(x,y) > MIN-SLOPE \]

Thresholds

Harking back to our first lectures, this can be seen as a 2D threshold of the entire dataset.

  • We first make the dataset into a tuple

\[ \textrm{cImg}(x,y) = \langle \underbrace{I_d(x,y)}_1, \underbrace{\nabla I_d(x,y)}_2 \rangle \]

\[ \textrm{skelImage}(x,y) = \begin{cases} 1, & \textrm{cImg}_1(x,y)\geq MIN-DIST \textbf{ AND } \textrm{cImg}_2(x,y)\geq MIN-SLOPE \\ 0, & \textrm{otherwise} \end{cases} \]
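The two-criterion threshold can be sketched as follows (`skeleton_threshold` and its cutoff values are hypothetical names for this illustration, not course code):

```python
import numpy as np
from scipy import ndimage

def skeleton_threshold(img, min_dist=1.0, min_slope=2.0):
    """Keep pixels whose distance value AND Laplacian response
    both exceed their cutoffs (the 2D threshold from the slide)."""
    dist = ndimage.distance_transform_edt(img)
    kernel = np.array([[-1., -1., -1.],
                       [-1.,  8., -1.],
                       [-1., -1., -1.]])
    slope = ndimage.convolve(dist, kernel)
    return (dist >= min_dist) & (slope >= min_slope)

# The surviving pixels of a horizontal bar trace its central backbone
img = np.zeros((7, 15), dtype=bool)
img[2:5, 1:14] = True
skel = skeleton_threshold(img)
```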

Different Thresholds

Skeleton


Pruning

Skeleton

Pruning

  • The structure is overgrown
  • Stricter 'thresholds' remove the spurious branches

Thickness Map

Cell Distance Map


Thickness Map From Skeleton

Calculating the thickness map by drawing a sphere at every foreground point is very time consuming (\( O(n^3) \)).

  • The skeleton (last section) is very closely related to the thickness.
  • We found the local maxima in the image using the Laplacian
  • We can thus grow the spheres from these points instead of from all points
  • Instead of thresholding to a binary image, we start by transforming the image to the distance value at each point

\[ \textrm{thSkelImage}(x,y) = \begin{cases} \textrm{cImg}_1(x,y), & \textrm{cImg}_1(x,y)\geq MIN-DIST \textbf{ AND } \textrm{cImg}_2(x,y)\geq MIN-SLOPE \\ 0, & \textrm{otherwise} \end{cases} \]
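The only change from the binary skeleton is keeping the distance value instead of a 1 (the cutoffs here are again illustrative, not course values):

```python
import numpy as np
from scipy import ndimage

MIN_DIST, MIN_SLOPE = 1.0, 2.0  # illustrative cutoffs

img = np.zeros((7, 15), dtype=bool)   # a horizontal bar
img[2:5, 1:14] = True
dist = ndimage.distance_transform_edt(img)
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])
slope = ndimage.convolve(dist, kernel)

on_skel = (dist >= MIN_DIST) & (slope >= MIN_SLOPE)
# thSkelImage: the distance value on the skeleton, zero elsewhere
th_skel = np.where(on_skel, dist, 0.0)
```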

Skeleton

From Skeleton vs All Points

   user  system elapsed 
  5.057   0.092   5.224 

Thickness from skeleton

   user  system elapsed 
 17.439   0.499  18.637 

Thickness from all points

Statistically Does it Matter

Thickness from all points

It depends

  • Small structures are lost
  • They may not have been very important, or may just have been noise
  • Higher values are very similar

id        Full Map   Skeleton Map
Min.      0.75       1.75
1st Qu.   2.47       2.47
Median    2.66       2.66
Mean      2.67       2.74
3rd Qu.   3.01       3.01
Max.      3.20       3.20

How much can we cut down

   user  system elapsed 
  2.643   0.045   2.742 

Skeleton

Thickness Distributions Compared

id        Full Map   Skeleton Map   Tiny Skeleton Map
Min.      0.75       1.75           2.00
1st Qu.   2.47       2.47           2.50
Median    2.66       2.66           2.66
Mean      2.67       2.74           2.77
3rd Qu.   3.01       3.01           3.01
Max.      3.20       3.20           3.20

Watershed

Watershed is a method for segmenting objects without using component labeling.

  • It utilizes the shape of structures to find objects
  • Imagine how rain would fall on a topographical map

Curvature

Characteristic Shape